Risks of AI in Legal Practice

While generative AI and other automation tools offer great potential for making a wide range of legal work faster and more efficient, current legal AI systems suffer from serious shortcomings that limit their effectiveness and create risks when they are used in real-world legal practice or teaching.


Overviews and Summaries of Risks

  • Harvard Law School Center on the Legal Profession: “Ethical Prompts: Professionalism, Ethics, and ChatGPT,” The Practice, March/April 2023 (discussing practical and philosophical concerns with the use of AI, including an increasing disconnect between who makes AI technology and who uses it, resulting in “more intermediaries and less understanding,” and the privatization and concentration of legal AI system makers).

Hallucinations/Non-Existent Cases or Other Cites

  • Stanford HAI: Dan Ho, et al.: “AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries” (evaluating claims by LexisNexis (creator of Lexis+ AI) and Thomson Reuters (creator of Westlaw AI-Assisted Research and Ask Practical Law AI) that their use of retrieval-augmented generation (RAG) helps significantly “avoid” hallucinations and guarantee “hallucination-free” legal citations. The study found that, while RAG systems “do reduce errors compared to general-purpose AI models like GPT-4 [–] a substantial improvement [–] these bespoke legal AI tools still hallucinate an alarming amount of the time: the Lexis+ AI and Ask Practical Law AI systems produced incorrect information more than 17% of the time, while Westlaw’s AI-Assisted Research hallucinated more than 34% of the time.”)
  • Reece Rogers, Wired: “Reduce AI Hallucinations With This Neat Software Trick [RAG],” June 24, 2024 (discussing the study above and related efforts to control hallucinations)
  • Stephen Council, “Stanford expert on ‘lying and technology’ accused of lying about technology,” SFGate, November 22, 2024

Inaccurate or Poor Quality Summaries of Data, Transcripts, or Other Materials


Perpetuation of Bias and Discrimination

  • arXiv — Amit Haim, Alejandro Salinas, Julian Nyarko: “What’s in a Name? Auditing Large Language Models for Race and Gender Bias” (from the abstract: “In our study, we prompt the [LLM] models for advice involving a named individual across a variety of scenarios, such as during car purchase negotiations or election outcome predictions. We find that the advice systematically disadvantages names that are commonly associated with racial minorities and women. Names associated with Black women receive the least advantageous outcomes. . . . Our findings underscore the importance of conducting audits at the point of LLM deployment and implementation to mitigate their potential for harm against marginalized communities.”)
  • Monica Schreiber, “Rethinking Algorithmic Decision Making” (summarizing Julian Nyarko, et al., “Designing Equitable Algorithms” paper), July 23, 2023
  • Politico, “Why Your Chatbot’s So Racist,” December 12, 2023 (report on a study showing racial bias in AI models used in healthcare)
    • Underlying study (from the abstract: LLMs used in healthcare “may recapitulate harmful, race-based . . . content when responding to eight different scenarios that check for race-based medicine or widespread misconceptions around race. . . . This study shows that based on our findings, these LLMs could potentially cause harm by perpetuating debunked, racist ideas.”)
  • Meredith Broussard, More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech (2023) (from the publisher: “The word ‘glitch’ implies an incidental error, as easy to patch up as it is to identify. But what if racism, sexism, and ableism aren’t just bugs in mostly functional machinery—what if they’re coded into the system itself? . . . [This book] demonstrates how neutrality in tech is a myth and why algorithms need to be held accountable.”)
  • Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code (2019) (from the publisher: “From everyday apps to complex algorithms, Ruha Benjamin cuts through tech-industry hype to understand how emerging technologies can reinforce White supremacy and deepen social inequity. Benjamin argues that automation, far from being a sinister story of racist programmers scheming on the dark web, has the potential to hide, speed up, and deepen discrimination while appearing neutral and even benevolent when compared to the racism of a previous era. Presenting the concept of the “New Jim Code,” she shows how a range of discriminatory designs encode inequity by explicitly amplifying racial hierarchies; by ignoring but thereby replicating social divisions; or by aiming to fix racial bias but ultimately doing quite the opposite.”)

Inadvertent Disclosure of Confidential Information/Waiver of Privilege

  • Bloomberg Law: “NY Bar Warns Attorneys of Privacy Risks Posed by AI Tools,” April 8, 2024
  • Bloomberg: “Generative AI Use Poses Threats to Attorney-Client Privilege,” January 23, 2024

Poor Quality Drafting of Briefs, Arguments, and Documents

Widening the Access to Justice Gap/Inequalities in Representation

  • As legal tech expert Bob Ambrogi describes the concern: “Current generative AI tools are expensive to use. That raises the concern that only those with deep pockets — big firms and big corporations — will have access to them, while pro se individuals, smaller firms, and legal aid organizations will be shut out. Given the potential power of generative AI, this could further exacerbate inequality in the delivery of justice. One possible answer: public AI models not owned or controlled by any single corporation.”

Reduction in the Quality of Training and Opportunities for Newer Lawyers

  • Justin Henry, The American Lawyer: “We Asked Every Am Law 100 Law Firm How They’re Using Gen AI. Here’s What We Learned,” January 29, 2024 (“generative AI’s ability to perform work that traditionally served as training for junior associates could disrupt young lawyers’ professional development. ‘That lower-level work provides an almost endless supply of hypotheticals,’ Almon said. ‘You might have to consider over and over many examples of attorney-client privilege, how it’s applied and when, and applying evidentiary rules. . . . So how do those junior lawyers learn the nuances of privilege application? I think that’s a solvable problem, but it will require a great deal of training and education.’”)

Lawyer Liability for AI Errors, Client Misunderstandings and Lack of Consent, etc.